49 research outputs found

    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expanded VR UI design space, and performed various user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. We then experimented with various user interfaces such as binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap selection techniques were also explored to find the most appropriate one. As a low-cost, lightweight, and low-power technology, a touch sensor can make an ideal interface for mobile headsets. Moreover, the front touch area can be large enough to allow a wide range of interaction types, such as multi-finger interactions. With this novel front touch interface, we pave the way to new virtual reality interaction methods.

    HairBrush for Immersive Data-Driven Hair Modeling

    While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skill and effort, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring with the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, it is inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained on a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
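
    The global blending step — fitting a weighted combination of dataset variations to the user's guide strokes — can be sketched as an ordinary least-squares problem. The `fit_blend_weights` helper below is a hypothetical illustration over flattened vertex arrays; the paper's actual fitting also includes local deformation, which is omitted here.

```python
import numpy as np

def fit_blend_weights(variations, target):
    """Least-squares fit of blend-shape weights.

    variations: (k, n) array, each row a flattened hairstyle variation.
    target:     (n,) flattened geometry derived from the user's strokes.
    Returns weights w minimizing ||variations.T @ w - target||_2.
    (Illustrative stand-in for the global blending step, not the
    paper's implementation.)
    """
    B = np.asarray(variations, dtype=float).T          # (n, k) basis matrix
    w, *_ = np.linalg.lstsq(B, np.asarray(target, dtype=float), rcond=None)
    return w

# Toy usage: the target is an exact mix of two variations,
# so the recovered weights are the mixing coefficients.
v = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
t = 0.3 * v[0] + 0.7 * v[1]
w = fit_blend_weights(v, t)
```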

    Simulation of fluids with reduced diffusion, thin liquid films, volume control, and a mesh filter in rational form

    This thesis is concerned with the evolution of implicit or explicit surfaces. The first part of the thesis addresses three problems in fluid simulation: advection, thin films, and volume error. First, we show that the back and forth error compensation and correction (BFECC) method can significantly reduce dissipation and diffusion. Second, thin films are hard to simulate since they have highly complex liquid/gas interfaces that incur high memory and computational costs. We address these difficulties by using a cell-centered octree grid to reduce the memory cost and a multigrid method to reduce the computational cost. Third, volume loss is an undesired side effect of the level set method. A known solution to this problem is the particle level set method, which is expensive and has a small but accumulating volume error. We provide a solution that is computationally effective and can prevent volume loss without accumulation. The second part of this thesis is focused on filtering a triangle mesh to produce a mesh whose details are selectively reduced or amplified. We develop a mesh filter with a rational transfer function, which is a generalization of previously developed mesh filters. In addition, we show that the mesh filter parameters can be computed from the physical size of mesh features.
    Ph.D. Committee Chair: Jarek Rossignac; Committee Members: Greg Turk, Irfan Essa, Xiangmin Jiao, Yingjie Li
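
    The BFECC scheme mentioned above can be illustrated in a few lines: advect the field forward, advect the result backward, and use half the round-trip error to correct the input before the final advection. The sketch below uses 1D semi-Lagrangian advection with a constant velocity and periodic boundaries — an assumption for illustration, not the thesis implementation.

```python
import numpy as np

def advect(phi, vel, dt, dx):
    """1D semi-Lagrangian advection with linear interpolation and
    periodic boundaries; constant velocity `vel` for simplicity."""
    n = len(phi)
    x = np.arange(n) - vel * dt / dx          # backtrace in grid coordinates
    i0 = np.floor(x).astype(int)
    f = x - i0                                # interpolation fraction
    return (1 - f) * phi[i0 % n] + f * phi[(i0 + 1) % n]

def bfecc(phi, vel, dt, dx):
    """Back and Forth Error Compensation and Correction:
    advect forward, advect back with reversed velocity, then use half
    the round-trip error to correct phi before the final advection."""
    fwd = advect(phi, vel, dt, dx)
    back = advect(fwd, -vel, dt, dx)
    return advect(phi + 0.5 * (phi - back), vel, dt, dx)
```

    For a half-cell shift of a sharp spike, plain semi-Lagrangian advection smears the peak to 0.5 of its height, while the BFECC-corrected result retains noticeably more of it — the reduced-diffusion behavior the abstract refers to.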

    Collision prediction for polyhedra under screw motions


    Collision Prediction

    The prediction of collisions amongst N rigid objects may be reduced to a series of computations of the time to first contact for all pairs of objects. Simple enclosing bounds and hierarchical partitions of the space-time domain are often used to avoid testing object pairs that clearly will not collide. When the remaining pairs involve only polyhedra under straight-line translation, the exact computation of the collision time and of the contacts requires only solving for intersections between linear geometries. When a pair is subject to a more general relative motion, such a direct collision prediction calculation may be intractable. The popular brute-force collision detection strategy of executing the motion for a series of small time steps and checking for static interferences after each step is often computationally prohibitive. We propose instead a less expensive collision prediction strategy, where we approximate the relative motion between pairs of objects by a sequence of screw motion segments, each defined by the relative position and orientation of the two objects at the beginning and at the end of the segment. We reduce the computation of the exact collision time and of the corresponding face/vertex and edge/edge collision points to the numeric extraction of the roots of simple univariate analytic functions. Furthermore, we propose a series of simple rejection tests, which exploit the particularity of the screw motion to immediately decide that some objects do not collide, or to speed up the prediction of collisions by about 30%, avoiding on average three quarters of the root-finding queries even when the objects actually collide.
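
    The reduction to univariate root finding can be illustrated with a generic first-root query: scan a time interval for a sign change of a contact function, then refine by bisection. The `first_root` helper and the cosine contact function below are illustrative assumptions, not the paper's actual formulas.

```python
import math

def first_root(f, t0, t1, steps=256, tol=1e-10):
    """Find the earliest root of f on [t0, t1]: scan for a sign change,
    then refine it by bisection. A sketch of the kind of univariate
    root query the screw-motion collision test reduces to."""
    ts = [t0 + (t1 - t0) * i / steps for i in range(steps + 1)]
    for a, b in zip(ts, ts[1:]):
        fa, fb = f(a), f(b)
        if fa == 0.0:
            return a
        if fa * fb < 0.0:
            while b - a > tol:
                m = 0.5 * (a + b)
                if fa * f(m) <= 0.0:
                    b = m                      # root lies in [a, m]
                else:
                    a, fa = m, f(m)            # root lies in [m, b]
            return 0.5 * (a + b)
    return None                                # no collision in the interval

# Toy example: a vertex at radius 1 rotating about the screw axis,
# tested against the plane x = 0.5; contact when cos(t) = 0.5.
t_hit = first_root(lambda t: math.cos(t) - 0.5, 0.0, math.pi)
```

    In practice the rejection tests the abstract describes would run before such queries, so most pairs never reach the root finder.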

    Video-based nonphotorealistic and expressive illustration of motion

    We present a semi-automatic approach for adding expressive renderings to images and videos that highlight motion and movement. Our technique relies on motion analysis of video, where the motion information from the image sequence is used to add expressive information. The first step in our approach is to extract a moving region of the video by segmenting and then grouping regions of compatible motions. In the second step, a user can interactively choose or refine a grouping region that represents the moving object of interest. In the third and final stage, the user can apply various visual effects such as temporal flares, time-lapse, and particle effects. We have implemented a prototype system that can be used to illustrate and expressively render motions in videos and images with simple user interaction. Our system can deal with most translational and rotational motions without the need for a fixed background.
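
    The first grouping step can be sketched as thresholding per-pixel motion vectors and collecting connected components. The `moving_regions` helper below is a simplified, hypothetical stand-in: it groups by flow magnitude with 4-connectivity, whereas the paper groups regions of compatible motions.

```python
import numpy as np
from collections import deque

def moving_regions(flow, thresh=1.0):
    """Group pixels with significant motion into connected regions.

    flow: (H, W, 2) array of per-pixel motion vectors. Pixels whose
    flow magnitude exceeds `thresh` are grouped with 4-connectivity.
    Returns an (H, W) label image (0 = static) and the region count."""
    mask = np.linalg.norm(flow, axis=2) > thresh
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                count += 1                       # start a new region
                labels[sy, sx] = count
                q = deque([(sy, sx)])
                while q:                         # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

# Toy usage: two isolated moving pixels yield two separate regions.
flow = np.zeros((5, 5, 2))
flow[0, 0] = (2.0, 0.0)
flow[4, 4] = (0.0, 2.0)
labels, count = moving_regions(flow)
```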

    A Shadow Volume Algorithm for Opaque and Transparent Non-Manifold Casters

    Since precise shadows can be generated in real time, graphics applications often use the shadow volume algorithm. This algorithm was limited to manifold casters, but has recently been extended to general non-manifold casters with oriented triangles. We provide a further extension to general non-manifold meshes and an additional extension to shadows of transparent casters. To achieve these, we first introduce a generalization of an object's silhouette to non-manifold meshes. Using this generalization, we can compute the number of caster surfaces between the light and the receiver, and furthermore, the light intensity arriving at the receiver fragments after the light has traveled through multiple colored transparent caster surfaces. With these extensions, shadows can be generated from transparent casters that have constant color and opacity.
    In real-time graphics applications such as games, shadows of various objects add important visual realism and provide additional information on the spatial relationships between objects in the scene. For example, a shadow drawn at the foot of a game character makes the user believe that the foot is resting on the ground.
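
    The transparent-caster attenuation can be sketched per color channel: each surface the light passes through scales the channel by a blend between full transmission and the surface's color, weighted by its opacity. This compositing model is an assumption for illustration; the paper's exact intensity formula may differ.

```python
def transmitted_light(light, surfaces):
    """Attenuate an RGB light through a stack of colored transparent
    casters. Each surface is ((r, g, b), opacity); per channel the
    light is multiplied by lerp(1, color, opacity) — a common
    compositing model assumed here, not taken from the paper.
    A surface with opacity 1 and color (0, 0, 0) blocks the light."""
    r, g, b = light
    for (cr, cg, cb), a in surfaces:
        r *= (1 - a) + a * cr
        g *= (1 - a) + a * cg
        b *= (1 - a) + a * cb
    return (r, g, b)

# A half-opaque red pane over a fully transparent pane:
shade = transmitted_light((1.0, 1.0, 1.0),
                          [((1.0, 0.0, 0.0), 0.5),
                           ((1.0, 1.0, 1.0), 0.0)])
```

    In the shadow volume setting, the per-caster counters described above would determine which surfaces lie between the light and each receiver fragment, and a product like this would give the fragment's shadowed intensity.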